
A Parametric Framework for Kernel-Based Dynamic Mode Decomposition Using Deep Learning
High-fidelity simulations of complex phenomena in science and engineering are generally computationally expensive. This high computational cost prohibits their application in many-query scenarios, such as uncertainty quantification, design optimization, or control, where a large number of model evaluations with parameter variations is required. Dynamic mode decomposition (DMD) is a powerful non-intrusive surrogate modelling technique that provides a linear tangent approximation of the dynamics over time. However, DMD faces challenges in approximating complex nonlinear dynamical systems and is not designed for parametric predictions. In this work, we propose a parametric framework for the kernel-based linear and nonlinear disambiguation optimization (LANDO) algorithm. The LANDO algorithm can be viewed as a generalization of DMD with a kernel representation, and the proposed framework extends it to parametric predictions. The framework consists of an offline and an online phase. The offline phase prepares the essential components for prediction, namely a series of LANDO surrogate models that approximate the system dynamics under different parametric configurations drawn from a training dataset. In the online phase, the LANDO models predict the state of the system at a desired time instant, and the mapping between the parameter and the state is approximated using deep learning techniques. Additionally, we propose applying dimensionality reduction techniques to further reduce the training cost for high-dimensional systems. We showcase the effectiveness of the proposed parametric framework on three numerical examples: the Lotka-Volterra model, the heat equation, and the reaction-diffusion equation. The proposed framework achieves a high level of accuracy for all numerical examples. Moreover, the dimensionality reduction for high-dimensional systems effectively reduces the computational cost of training without significantly compromising the predictive performance of the parametric framework.
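The offline/online split described above can be illustrated with a minimal sketch. A kernel ridge regression of the one-step flow map stands in for the LANDO surrogate, and a toy linear system with parameter mu stands in for the dynamics; the Gaussian kernel choice, the function names, and the toy system are assumptions for illustration only, not the authors' implementation (which also learns the parameter-to-state map with a neural network, omitted here for brevity).

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between column-sample sets A (n x m) and B (n x p)
    sq = np.sum(A**2, 0)[:, None] + np.sum(B**2, 0)[None, :] - 2.0 * A.T @ B
    return np.exp(-gamma * sq)

def fit_kernel_surrogate(X, Y, gamma=1.0, reg=1e-8):
    """Offline phase (sketch): regress the one-step map as Y ~ W k(X, .),
    a kernel representation in the spirit of LANDO (hypothetical simplification)."""
    K = rbf_kernel(X, X, gamma)
    W = np.linalg.solve(K + reg * np.eye(K.shape[0]), Y.T).T  # W = Y (K + reg I)^{-1}
    return lambda x: W @ rbf_kernel(X, x[:, None], gamma)[:, 0]

def step(x, mu):
    # Toy parametric dynamics x_{k+1} = A(mu) x_k (illustrative stand-in)
    A = np.array([[0.9, mu], [-mu, 0.9]])
    return A @ x

# Generate training snapshots for one parameter value mu
mu = 0.1
X = np.zeros((2, 50))
X[:, 0] = [1.0, 0.0]
for k in range(49):
    X[:, k + 1] = step(X[:, k], mu)

# Offline: fit the surrogate on snapshot pairs (x_k, x_{k+1})
model = fit_kernel_surrogate(X[:, :-1], X[:, 1:], gamma=0.5)

# Online phase (sketch): roll the surrogate forward and compare with the truth
x_pred = X[:, 0].copy()
for _ in range(10):
    x_pred = model(x_pred)
err = np.linalg.norm(x_pred - X[:, 10])
```

In the full framework one such surrogate is trained per parameter sample, and a deep network interpolates the predicted states across parameter values; for high-dimensional states the regression would be performed on reduced coordinates.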